2025-04-03 22:51:50,061 [ 90694 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:53, check_args_and_update_paths) 2025-04-03 22:51:50,061 [ 90694 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:97, check_args_and_update_paths) 2025-04-03 22:51:50,061 [ 90694 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:108, check_args_and_update_paths) 2025-04-03 22:51:50,061 [ 90694 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:110, check_args_and_update_paths) clickhouse_integration_tests_volume Running pytest container as: 'docker run --rm --name clickhouse_integration_tests_bayf19 --privileged --dns-search='.' --memory=30709030912 --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=6712d5cc610d -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=caad4729259e -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 test_accept_invalid_certificate/test.py::test_accept test_accept_invalid_certificate/test.py::test_connection_accept test_accept_invalid_certificate/test.py::test_default test_accept_invalid_certificate/test.py::test_strict_connection_reject test_accept_invalid_certificate/test.py::test_strict_reject test_accept_invalid_certificate/test.py::test_strict_reject_with_config test_always_fetch_merged/test.py::test_replica_always_download test_async_insert_memory/test.py::test_memory_usage test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field test_attach_partition_using_copy/test.py::test_all_replicated test_attach_partition_using_copy/test.py::test_both_mergetree test_attach_partition_using_copy/test.py::test_not_work_on_different_disk test_attach_partition_using_copy/test.py::test_only_destination_replicated test_azure_blob_storage_zero_copy_replication/test.py::test_zero_copy_replication test_backup_restore/test.py::test_attach_partition test_backup_restore/test.py::test_replace_partition test_backup_restore/test.py::test_restore 'test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-False]' 'test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-True]' 'test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[native-True]' test_backup_restore_on_cluster/test.py::test_backup_restore_on_single_replica test_backup_restore_on_cluster/test.py::test_different_tables_on_nodes test_backup_restore_on_cluster/test.py::test_empty_replicated_table test_backup_restore_on_cluster/test.py::test_file_deduplication test_backup_restore_on_cluster/test.py::test_get_error_from_other_host test_backup_restore_on_cluster/test.py::test_keeper_value_max_size test_backup_restore_on_cluster/test.py::test_mutation test_backup_restore_on_cluster/test.py::test_projection test_backup_restore_on_cluster/test.py::test_replicated_database test_backup_restore_on_cluster/test.py::test_replicated_database_async test_backup_restore_on_cluster/test.py::test_replicated_database_compare_parts 'test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[database]' 'test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[uuid]' test_backup_restore_on_cluster/test.py::test_replicated_table test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_bigger_cluster test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_smaller_cluster test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_def test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_insert test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_merge test_backup_restore_on_cluster/test.py::test_replicated_table_with_uuid_in_zkpath test_backup_restore_on_cluster/test.py::test_required_privileges test_backup_restore_on_cluster/test.py::test_shutdown_waits_for_backup test_backup_restore_on_cluster/test.py::test_system_functions test_backup_restore_on_cluster/test.py::test_system_users test_backup_restore_on_cluster/test.py::test_table_in_replicated_database_with_not_synced_def test_backup_restore_on_cluster/test.py::test_table_with_parts_in_queue_considered_non_empty test_backup_restore_on_cluster/test.py::test_tables_dependency test_cluster_all_replicas/test.py::test_cluster 'test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes]' 'test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes]' test_cluster_all_replicas/test.py::test_global_in 'test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes]' 'test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes]' test_compression_nested_columns/test.py::test_nested_compression_codec test_concurrent_queries_restriction_by_query_kind/test.py::test_insert test_concurrent_queries_restriction_by_query_kind/test.py::test_select test_config_decryption/test_wrong_settings.py::test_invalid_chars test_config_decryption/test_wrong_settings.py::test_no_encryption_key test_config_decryption/test_wrong_settings.py::test_subnodes test_config_decryption/test_wrong_settings.py::test_wrong_method test_config_xml_main/test.py::test_xml_main_conf test_config_yaml_main/test.py::test_yaml_main_conf test_create_query_constraints/test.py::test_create_query_const_constraints test_create_query_constraints/test.py::test_create_query_minmax_constraints test_custom_settings/test.py::test_custom_settings test_custom_settings/test.py::test_illformed_setting test_ddl_config_hostname/test.py::test_ddl_queue_delete_add_replica test_ddl_worker_replicas/test.py::test_ddl_worker_replicas test_default_role/test.py::test_alter_user test_default_role/test.py::test_set_default_roles test_default_role/test.py::test_wrong_set_default_role 'test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_complex[complex_key_cache]' 'test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_simple[cache]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_cache]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_direct]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_hashed]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_ranged[range_hashed]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[cache]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[direct]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[flat]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[hashed]' 'test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node0]' 'test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node1]' 'test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node0]' 'test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node1]' 'test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node0]' 'test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node1]' 'test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node0]' 'test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node1]' test_dictionaries_dependency/test.py::test_no_lazy_load 'test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_integers_key_hashed]' 'test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_mixed_key_hashed]' 'test_dictionaries_select_all/test.py::test_select_all[clickhouse_flat]' 'test_dictionaries_select_all/test.py::test_select_all[clickhouse_hashed]' 'test_dictionaries_select_all/test.py::test_select_all[clickhouse_range_hashed]' 'test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_cache]' 'test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_integers_key_cache]' 'test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_mixed_key_cache]' test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation test_disk_access_storage/test.py::test_alter -vvv" altinityinfra/integration-tests-runner:cd6390247eca '. Start tests ============================= test session starts ============================== platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3 cachedir: .pytest_cache rootdir: /ClickHouse/tests/integration configfile: pytest.ini plugins: random-0.2, timeout-2.2.0, repeat-0.9.3, order-1.0.0, reportlog-0.4.0, xdist-3.5.0 timeout: 900.0s timeout method: signal timeout func_only: False created: 10/10 workers 10 workers [100 items] scheduling tests via LoadFileScheduling test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node0] test_cluster_all_replicas/test.py::test_cluster test_accept_invalid_certificate/test.py::test_accept test_backup_restore/test.py::test_attach_partition test_config_decryption/test_wrong_settings.py::test_invalid_chars test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_integers_key_hashed] test_attach_partition_using_copy/test.py::test_all_replicated test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_cache] test_default_role/test.py::test_alter_user test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-False] [gw7] [ 1%] PASSED test_config_decryption/test_wrong_settings.py::test_invalid_chars test_config_decryption/test_wrong_settings.py::test_no_encryption_key [gw5] [ 2%] PASSED test_accept_invalid_certificate/test.py::test_accept test_accept_invalid_certificate/test.py::test_connection_accept [gw5] [ 3%] PASSED test_accept_invalid_certificate/test.py::test_connection_accept test_accept_invalid_certificate/test.py::test_default [gw5] [ 4%] PASSED test_accept_invalid_certificate/test.py::test_default test_accept_invalid_certificate/test.py::test_strict_connection_reject [gw5] [ 5%] PASSED test_accept_invalid_certificate/test.py::test_strict_connection_reject test_accept_invalid_certificate/test.py::test_strict_reject [gw3] [ 6%] PASSED test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_integers_key_hashed] test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_mixed_key_hashed] [gw5] [ 7%] PASSED test_accept_invalid_certificate/test.py::test_strict_reject test_accept_invalid_certificate/test.py::test_strict_reject_with_config [gw5] [ 8%] PASSED test_accept_invalid_certificate/test.py::test_strict_reject_with_config [gw8] [ 9%] PASSED test_default_role/test.py::test_alter_user test_default_role/test.py::test_set_default_roles [gw3] [ 10%] PASSED test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_mixed_key_hashed] test_dictionaries_select_all/test.py::test_select_all[clickhouse_flat] [gw3] [ 11%] PASSED test_dictionaries_select_all/test.py::test_select_all[clickhouse_flat] test_dictionaries_select_all/test.py::test_select_all[clickhouse_hashed] [gw3] [ 12%] PASSED test_dictionaries_select_all/test.py::test_select_all[clickhouse_hashed] test_dictionaries_select_all/test.py::test_select_all[clickhouse_range_hashed] [gw3] [ 13%] PASSED test_dictionaries_select_all/test.py::test_select_all[clickhouse_range_hashed] test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_cache] [gw8] [ 14%] PASSED test_default_role/test.py::test_set_default_roles test_default_role/test.py::test_wrong_set_default_role test_concurrent_queries_restriction_by_query_kind/test.py::test_insert [gw2] [ 15%] PASSED test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-False] [gw8] [ 16%] PASSED test_default_role/test.py::test_wrong_set_default_role [gw3] [ 17%] PASSED test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_cache] test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_integers_key_cache] [gw9] [ 18%] PASSED test_backup_restore/test.py::test_attach_partition test_backup_restore/test.py::test_replace_partition test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-True] [gw4] [ 19%] PASSED test_cluster_all_replicas/test.py::test_cluster test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes] [gw3] [ 20%] PASSED test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_integers_key_cache] test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_mixed_key_cache] [gw7] [ 21%] PASSED test_config_decryption/test_wrong_settings.py::test_no_encryption_key test_config_decryption/test_wrong_settings.py::test_subnodes test_create_query_constraints/test.py::test_create_query_const_constraints [gw2] [ 22%] PASSED test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-True] [gw3] [ 23%] PASSED test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_mixed_key_cache] test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[native-True] [gw9] [ 24%] PASSED test_backup_restore/test.py::test_replace_partition test_backup_restore/test.py::test_restore [gw4] [ 25%] PASSED test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes] test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes] test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_complex[complex_key_cache] [gw2] [ 26%] PASSED test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[native-True] [gw1] [ 27%] PASSED test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node0] test_backup_restore_on_cluster/test.py::test_backup_restore_on_single_replica [gw9] [ 28%] PASSED test_backup_restore/test.py::test_restore [gw4] [ 29%] PASSED test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes] test_cluster_all_replicas/test.py::test_global_in [gw5] [ 30%] PASSED test_concurrent_queries_restriction_by_query_kind/test.py::test_insert test_concurrent_queries_restriction_by_query_kind/test.py::test_select [gw4] [ 31%] PASSED test_cluster_all_replicas/test.py::test_global_in test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes] [gw7] [ 32%] PASSED test_config_decryption/test_wrong_settings.py::test_subnodes test_config_decryption/test_wrong_settings.py::test_wrong_method test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node1] [gw2] [ 33%] PASSED test_backup_restore_on_cluster/test.py::test_backup_restore_on_single_replica test_always_fetch_merged/test.py::test_replica_always_download [gw8] [ 34%] PASSED test_create_query_constraints/test.py::test_create_query_const_constraints test_create_query_constraints/test.py::test_create_query_minmax_constraints test_backup_restore_on_cluster/test.py::test_different_tables_on_nodes [gw4] [ 35%] PASSED test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes] test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes] [gw8] [ 36%] PASSED test_create_query_constraints/test.py::test_create_query_minmax_constraints [gw2] [ 37%] PASSED test_backup_restore_on_cluster/test.py::test_different_tables_on_nodes test_backup_restore_on_cluster/test.py::test_empty_replicated_table test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field [gw2] [ 38%] PASSED test_backup_restore_on_cluster/test.py::test_empty_replicated_table test_backup_restore_on_cluster/test.py::test_file_deduplication [gw7] [ 39%] PASSED test_config_decryption/test_wrong_settings.py::test_wrong_method test_async_insert_memory/test.py::test_memory_usage [gw4] [ 40%] PASSED test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes] [gw2] [ 41%] PASSED test_backup_restore_on_cluster/test.py::test_file_deduplication [gw0] [ 42%] PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_cache] test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_direct] [gw1] [ 43%] PASSED test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node1] test_backup_restore_on_cluster/test.py::test_get_error_from_other_host [gw2] [ 44%] PASSED test_backup_restore_on_cluster/test.py::test_get_error_from_other_host [gw5] [ 45%] PASSED test_concurrent_queries_restriction_by_query_kind/test.py::test_select test_backup_restore_on_cluster/test.py::test_keeper_value_max_size test_config_xml_main/test.py::test_xml_main_conf test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node0] test_custom_settings/test.py::test_custom_settings [gw2] [ 46%] PASSED test_backup_restore_on_cluster/test.py::test_keeper_value_max_size [gw9] [ 47%] PASSED test_always_fetch_merged/test.py::test_replica_always_download test_backup_restore_on_cluster/test.py::test_mutation [gw8] [ 48%] PASSED test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field test_azure_blob_storage_zero_copy_replication/test.py::test_zero_copy_replication [gw2] [ 49%] PASSED test_backup_restore_on_cluster/test.py::test_mutation test_backup_restore_on_cluster/test.py::test_projection [gw1] [ 50%] PASSED test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node0] test_ddl_worker_replicas/test.py::test_ddl_worker_replicas [gw5] [ 51%] PASSED test_custom_settings/test.py::test_custom_settings test_custom_settings/test.py::test_illformed_setting [gw5] [ 52%] PASSED test_custom_settings/test.py::test_illformed_setting [gw3] [ 53%] PASSED test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_complex[complex_key_cache] test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_simple[cache] [gw2] [ 54%] PASSED test_backup_restore_on_cluster/test.py::test_projection test_backup_restore_on_cluster/test.py::test_replicated_database test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node1] [gw4] [ 55%] PASSED test_config_xml_main/test.py::test_xml_main_conf test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation [gw2] [ 56%] PASSED test_backup_restore_on_cluster/test.py::test_replicated_database test_backup_restore_on_cluster/test.py::test_replicated_database_async [gw1] [ 57%] PASSED test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node1] [gw0] [ 58%] PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_direct] test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_hashed] [gw4] [ 59%] PASSED test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation [gw2] [ 60%] PASSED test_backup_restore_on_cluster/test.py::test_replicated_database_async test_backup_restore_on_cluster/test.py::test_replicated_database_compare_parts test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node0] [gw2] [ 61%] PASSED test_backup_restore_on_cluster/test.py::test_replicated_database_compare_parts test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[database] [gw1] [ 62%] PASSED test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node0] [gw2] [ 63%] PASSED test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[database] test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[uuid] test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node1] [gw2] [ 64%] PASSED test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[uuid] test_backup_restore_on_cluster/test.py::test_replicated_table [gw3] [ 65%] PASSED test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_simple[cache] [gw2] [ 66%] PASSED test_backup_restore_on_cluster/test.py::test_replicated_table test_compression_nested_columns/test.py::test_nested_compression_codec [gw1] [ 67%] PASSED test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node1] test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_bigger_cluster [gw2] [ 68%] PASSED test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_bigger_cluster test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node0] test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_smaller_cluster [gw0] [ 69%] PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_hashed] test_dictionaries_all_layouts_separate_sources/test_http.py::test_ranged[range_hashed] [gw2] [ 70%] PASSED test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_smaller_cluster test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_def [gw2] [ 71%] PASSED test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_def test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_insert [gw3] [ 72%] PASSED test_compression_nested_columns/test.py::test_nested_compression_codec [gw1] [ 73%] PASSED test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node0] [gw2] [ 74%] PASSED test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_insert [gw0] [ 75%] PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_ranged[range_hashed] test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[cache] test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_merge test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node1] [gw2] [ 76%] PASSED test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_merge test_backup_restore_on_cluster/test.py::test_replicated_table_with_uuid_in_zkpath [gw2] [ 77%] PASSED test_backup_restore_on_cluster/test.py::test_replicated_table_with_uuid_in_zkpath test_backup_restore_on_cluster/test.py::test_required_privileges [gw2] [ 78%] PASSED test_backup_restore_on_cluster/test.py::test_required_privileges [gw1] [ 79%] PASSED test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node1] test_backup_restore_on_cluster/test.py::test_shutdown_waits_for_backup [gw8] [ 80%] PASSED test_ddl_worker_replicas/test.py::test_ddl_worker_replicas test_dictionaries_dependency/test.py::test_no_lazy_load test_disk_access_storage/test.py::test_alter [gw2] [ 81%] PASSED test_backup_restore_on_cluster/test.py::test_shutdown_waits_for_backup test_backup_restore_on_cluster/test.py::test_system_functions [gw0] [ 82%] PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[cache] test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[direct] [gw2] [ 83%] PASSED test_backup_restore_on_cluster/test.py::test_system_functions test_backup_restore_on_cluster/test.py::test_system_users [gw2] [ 84%] PASSED test_backup_restore_on_cluster/test.py::test_system_users test_backup_restore_on_cluster/test.py::test_table_in_replicated_database_with_not_synced_def [gw2] [ 85%] PASSED test_backup_restore_on_cluster/test.py::test_table_in_replicated_database_with_not_synced_def test_backup_restore_on_cluster/test.py::test_table_with_parts_in_queue_considered_non_empty [gw2] [ 86%] PASSED test_backup_restore_on_cluster/test.py::test_table_with_parts_in_queue_considered_non_empty [gw8] [ 87%] PASSED test_disk_access_storage/test.py::test_alter test_backup_restore_on_cluster/test.py::test_tables_dependency [gw2] [ 88%] PASSED test_backup_restore_on_cluster/test.py::test_tables_dependency [gw1] [ 89%] PASSED test_dictionaries_dependency/test.py::test_no_lazy_load [gw0] [ 90%] PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[direct] test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[flat] [gw9] [ 91%] PASSED test_azure_blob_storage_zero_copy_replication/test.py::test_zero_copy_replication [gw7] [ 92%] PASSED test_async_insert_memory/test.py::test_memory_usage test_config_yaml_main/test.py::test_yaml_main_conf test_ddl_config_hostname/test.py::test_ddl_queue_delete_add_replica [gw7] [ 93%] PASSED test_config_yaml_main/test.py::test_yaml_main_conf [gw0] [ 94%] PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[flat] test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[hashed] [gw9] [ 95%] PASSED test_ddl_config_hostname/test.py::test_ddl_queue_delete_add_replica [gw0] [ 96%] PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[hashed] [gw6] [ 97%] FAILED test_attach_partition_using_copy/test.py::test_all_replicated test_attach_partition_using_copy/test.py::test_both_mergetree [gw6] [ 98%] FAILED test_attach_partition_using_copy/test.py::test_both_mergetree test_attach_partition_using_copy/test.py::test_not_work_on_different_disk [gw6] [ 99%] FAILED test_attach_partition_using_copy/test.py::test_not_work_on_different_disk test_attach_partition_using_copy/test.py::test_only_destination_replicated [gw6] [100%] FAILED test_attach_partition_using_copy/test.py::test_only_destination_replicated =================================== FAILURES =================================== _____________________________ test_all_replicated ______________________________ [gw6] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = def test_all_replicated(start_cluster): cleanup([replica1, replica2]) > create_source_table(replica1, "source", True) test_attach_partition_using_copy/test.py:126: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_attach_partition_using_copy/test.py:40: in create_source_table node.query_with_retry( helpers/cluster.py:3713: in query_with_retry result = self.query( helpers/cluster.py:3678: in query return self.client.query( helpers/client.py:39: in wrap return func(self, *args, **kwargs) helpers/client.py:79: in query ).get_answer() helpers/client.py:230: in get_answer self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) /usr/lib/python3.10/subprocess.py:1209: in wait return self._wait(timeout=timeout) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = timeout = 600 def _wait(self, timeout): """Internal implementation of wait() on POSIX.""" if self.returncode is not None: return self.returncode if timeout is not None: endtime = _time() + timeout # Enter a busy loop if we have a timeout. This busy loop was # cribbed from Lib/threading.py in Thread.wait() at r71065. delay = 0.0005 # 500 us -> initial delay of 1 ms while True: if self._waitpid_lock.acquire(False): try: if self.returncode is not None: break # Another thread waited. (pid, sts) = self._try_wait(os.WNOHANG) assert pid == self.pid or pid == 0 if pid == self.pid: self._handle_exitstatus(sts) break finally: self._waitpid_lock.release() remaining = self._remaining_time(endtime) if remaining <= 0: raise TimeoutExpired(self.args, timeout) delay = min(delay * 2, remaining, .05) > time.sleep(delay) E Failed: Timeout >900.0s /usr/lib/python3.10/subprocess.py:1953: Failed ---------------------------- Captured stdout setup ----------------------------- Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml ---------------------------- Captured stderr setup ----------------------------- Command:[docker ps | wc -l] Stdout:1 No running containers Pruning Docker networks Command:[docker network prune --force] Command:[sysctl net.ipv4.ip_local_port_range='55000 65535'] Stdout:net.ipv4.ip_local_port_range = 55000 65535 Running tests in /ClickHouse/tests/integration/test_attach_partition_using_copy/test.py Cluster start called. is_up=False Docker networks for project roottestattachpartitionusingcopy-gw6 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestattachpartitionusingcopy-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestattachpartitionusingcopy-gw6 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestattachpartitionusingcopy-gw6 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestattachpartitionusingcopy-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestattachpartitionusingcopy-gw6 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestattachpartitionusingcopy-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestattachpartitionusingcopy-gw6 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 Setup directory for instance: replica1 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_attach_partition_using_copy/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/database Setup logs dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: replica2 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_attach_partition_using_copy/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/configs/config.d Setup database dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/database Setup logs dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:6712d5cc610d', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env --project-name roottestattachpartitionusingcopy-gw6 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/docker-compose.yml pull] Stderr: replica2 Skipped - Image is already being pulled by zoo3 Stderr: replica1 Skipped - Image is already being pulled by zoo3 Stderr: zoo1 Skipped - Image is already being pulled by zoo3 Stderr: zoo2 Skipped - Image is already being pulled by zoo3 Stderr: zoo3 Pulling Stderr: zoo3 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper1/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper1/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper1/coordination', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper2/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper2/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper2/coordination', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper3/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper3/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper3/coordination'] Command:[docker compose --project-name roottestattachpartitionusingcopy-gw6 --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Stderr:time="2025-04-03T22:52:07Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestattachpartitionusingcopy-gw6_default Creating Stderr: Network roottestattachpartitionusingcopy-gw6_default Created Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Creating Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Creating Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Creating Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Created Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Created Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Created Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Starting Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Starting Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Starting Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Started Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Started Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Started Stderr:time="2025-04-03T22:52:09Z" level=debug msg="otel error" error="" Stderr:time="2025-04-03T22:52:09Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.8.2, port:2181, use_ssl:False Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.8.3, port:2181, use_ssl:False Connecting to 172.16.8.3(172.16.8.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.8.4, port:2181, use_ssl:False Connecting to 172.16.8.4(172.16.8.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env --project-name roottestattachpartitionusingcopy-gw6 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env --project-name roottestattachpartitionusingcopy-gw6 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/docker-compose.yml up -d --no-recreate] Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Running Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Running Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Running Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Creating Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Creating Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Created Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Created Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Starting Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Starting Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Started Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Started ClickHouse instance created get_instance_ip instance_name=replica1 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-replica1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=replica1 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-replica1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in replica1, ip: 172.16.8.6... http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-replica1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None ClickHouse replica1 started get_instance_ip instance_name=replica2 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-replica2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=replica2 http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-replica2-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in replica2, ip: 172.16.8.5... http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-replica2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/3d09c3de19a069297a3ce38bc20f0020fa32840cffe871ad23df5055b58055a2/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/3d09c3de19a069297a3ce38bc20f0020fa32840cffe871ad23df5055b58055a2/json HTTP/1.1" 200 None ClickHouse replica2 started ------------------------------ Captured log setup ------------------------------ 2025-04-03 22:51:56 [ 694 ] DEBUG : Command:[docker ps | wc -l] (cluster.py:122, run_and_check) 2025-04-03 22:51:56 [ 694 ] DEBUG : Stdout:1 (cluster.py:146, run_and_check) 2025-04-03 22:51:56 [ 694 ] DEBUG : No running containers (conftest.py:96, cleanup_environment) 2025-04-03 22:51:56 [ 694 ] DEBUG : Pruning Docker networks (conftest.py:98, cleanup_environment) 2025-04-03 22:51:56 [ 694 ] DEBUG : Command:[docker network prune --force] (cluster.py:122, run_and_check) 2025-04-03 22:51:56 [ 694 ] DEBUG : Command:[sysctl net.ipv4.ip_local_port_range='55000 65535'] (cluster.py:122, run_and_check) 2025-04-03 22:51:56 [ 694 ] DEBUG : Stdout:net.ipv4.ip_local_port_range = 55000 65535 (cluster.py:146, run_and_check) 2025-04-03 22:51:56 [ 694 ] INFO : Running tests in /ClickHouse/tests/integration/test_attach_partition_using_copy/test.py (cluster.py:2793, start) 2025-04-03 22:51:56 [ 694 ] DEBUG : Cluster start called. is_up=False (cluster.py:2800, start) 2025-04-03 22:51:56 [ 694 ] DEBUG : Docker networks for project roottestattachpartitionusingcopy-gw6 are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces) 2025-04-03 22:51:56 [ 694 ] DEBUG : Docker containers for project roottestattachpartitionusingcopy-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces) 2025-04-03 22:51:56 [ 694 ] DEBUG : Docker volumes for project roottestattachpartitionusingcopy-gw6 are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces) 2025-04-03 22:51:56 [ 694 ] DEBUG : Cleanup called (cluster.py:894, cleanup) 2025-04-03 22:51:56 [ 694 ] DEBUG : Docker networks for project roottestattachpartitionusingcopy-gw6 are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces) 2025-04-03 22:51:56 [ 694 ] DEBUG : Docker containers for project roottestattachpartitionusingcopy-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces) 2025-04-03 22:51:56 [ 694 ] DEBUG : Docker volumes for project roottestattachpartitionusingcopy-gw6 are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces) 2025-04-03 22:51:56 [ 694 ] DEBUG : Command:[docker container list --all --filter name='^/roottestattachpartitionusingcopy-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:122, run_and_check) 2025-04-03 22:51:56 [ 694 ] DEBUG : Unstopped containers: {} (cluster.py:908, cleanup) 2025-04-03 22:51:56 [ 694 ] DEBUG : No running containers for project: roottestattachpartitionusingcopy-gw6 (cluster.py:922, cleanup) 2025-04-03 22:51:56 [ 694 ] DEBUG : Trying to prune unused networks... (cluster.py:928, cleanup) 2025-04-03 22:51:56 [ 694 ] DEBUG : Trying to prune unused images... (cluster.py:944, cleanup) 2025-04-03 22:51:56 [ 694 ] DEBUG : Command:[docker image prune -f] (cluster.py:122, run_and_check) 2025-04-03 22:51:56 [ 694 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:146, run_and_check) 2025-04-03 22:51:56 [ 694 ] DEBUG : Images pruned (cluster.py:947, cleanup) 2025-04-03 22:51:56 [ 694 ] DEBUG : Trying to prune unused volumes... (cluster.py:953, cleanup) 2025-04-03 22:51:56 [ 694 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:122, run_and_check) 2025-04-03 22:51:56 [ 694 ] DEBUG : Stdout:1 (cluster.py:146, run_and_check) 2025-04-03 22:51:56 [ 694 ] DEBUG : Volumes pruned: 1 (cluster.py:958, cleanup) 2025-04-03 22:51:56 [ 694 ] DEBUG : Setup directory for instance: replica1 (cluster.py:2813, start) 2025-04-03 22:51:56 [ 694 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4639, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Create directory for common tests configuration (cluster.py:4644, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Copy common configuration from helpers (cluster.py:4664, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Generate and write macros file (cluster.py:4716, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_attach_partition_using_copy/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/configs/config.d (cluster.py:4752, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/database (cluster.py:4769, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/logs (cluster.py:4780, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4864, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Setup directory for instance: replica2 (cluster.py:2813, start) 2025-04-03 22:51:56 [ 694 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4639, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Create directory for common tests configuration (cluster.py:4644, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Copy common configuration from helpers (cluster.py:4664, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Generate and write macros file (cluster.py:4716, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_attach_partition_using_copy/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/configs/config.d (cluster.py:4752, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/database (cluster.py:4769, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/logs (cluster.py:4780, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4864, create_dir) 2025-04-03 22:51:56 [ 694 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:6712d5cc610d', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env (cluster.py:97, _create_env_file) 2025-04-03 22:51:56 [ 694 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-04-03 22:51:56 [ 694 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-04-03 22:51:56 [ 694 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-04-03 22:51:56 [ 694 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-04-03 22:51:56 [ 694 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:547, _make_request) 2025-04-03 22:51:56 [ 694 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env --project-name roottestattachpartitionusingcopy-gw6 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/docker-compose.yml pull] (cluster.py:122, run_and_check) 2025-04-03 22:52:07 [ 694 ] DEBUG : Stderr: replica2 Skipped - Image is already being pulled by zoo3 (cluster.py:148, run_and_check) 2025-04-03 22:52:07 [ 694 ] DEBUG : Stderr: replica1 Skipped - Image is already being pulled by zoo3 (cluster.py:148, run_and_check) 2025-04-03 22:52:07 [ 694 ] DEBUG : Stderr: zoo1 Skipped - Image is already being pulled by zoo3 (cluster.py:148, run_and_check) 2025-04-03 22:52:07 [ 694 ] DEBUG : Stderr: zoo2 Skipped - Image is already being pulled by zoo3 (cluster.py:148, run_and_check) 2025-04-03 22:52:07 [ 694 ] DEBUG : Stderr: zoo3 Pulling (cluster.py:148, run_and_check) 2025-04-03 22:52:07 [ 694 ] DEBUG : Stderr: zoo3 Pulled (cluster.py:148, run_and_check) 2025-04-03 22:52:07 [ 694 ] DEBUG : Setup ZooKeeper (cluster.py:2854, start) 2025-04-03 22:52:07 [ 694 ] DEBUG : Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper1/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper1/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper1/coordination', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper2/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper2/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper2/coordination', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper3/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper3/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/keeper3/coordination'] (cluster.py:2855, start) 2025-04-03 22:52:07 [ 694 ] DEBUG : Command:[docker compose --project-name roottestattachpartitionusingcopy-gw6 --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] (cluster.py:122, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr:time="2025-04-03T22:52:07Z" level=trace msg="Docker Desktop integration not enabled" (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw6_default Creating (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw6_default Created (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Creating (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Creating (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Creating (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Created (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Created (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Created (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Starting (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Starting (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Starting (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Started (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Started (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Started (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr:time="2025-04-03T22:52:09Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Stderr:time="2025-04-03T22:52:09Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check) 2025-04-03 22:52:09 [ 694 ] DEBUG : Wait ZooKeeper to start (cluster.py:2466, wait_zookeeper_to_start) 2025-04-03 22:52:09 [ 694 ] DEBUG : get_instance_ip instance_name=zoo1 (cluster.py:2082, get_instance_ip) 2025-04-03 22:52:09 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-zoo1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:09 [ 694 ] DEBUG : get_kazoo_client: zoo1, ip:172.16.8.2, port:2181, use_ssl:False (cluster.py:3341, get_kazoo_client) 2025-04-03 22:52:09 [ 694 ] INFO : Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False (connection.py:650, _connect) 2025-04-03 22:52:09 [ 694 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-04-03 22:52:09 [ 694 ] INFO : Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False (connection.py:650, _connect) 2025-04-03 22:52:09 [ 694 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-04-03 22:52:09 [ 694 ] INFO : Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False (connection.py:650, _connect) 2025-04-03 22:52:09 [ 694 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-04-03 22:52:09 [ 694 ] INFO : Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False (connection.py:650, _connect) 2025-04-03 22:52:09 [ 694 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-04-03 22:52:10 [ 694 ] INFO : Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False (connection.py:650, _connect) 2025-04-03 22:52:10 [ 694 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-04-03 22:52:12 [ 694 ] INFO : Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False (connection.py:650, _connect) 2025-04-03 22:52:12 [ 694 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-04-03 22:52:13 [ 694 ] INFO : Connecting to 172.16.8.2(172.16.8.2):2181, use_ssl: False (connection.py:650, _connect) 2025-04-03 22:52:13 [ 694 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-04-03 22:52:13 [ 694 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-04-03 22:52:13 [ 694 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-04-03 22:52:13 [ 694 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-04-03 22:52:13 [ 694 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-04-03 22:52:13 [ 694 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-04-03 22:52:13 [ 694 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-04-03 22:52:13 [ 694 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-04-03 22:52:13 [ 694 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. (connection.py:515, zk_loop) 2025-04-03 22:52:13 [ 694 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-04-03 22:52:13 [ 694 ] DEBUG : get_instance_ip instance_name=zoo2 (cluster.py:2082, get_instance_ip) 2025-04-03 22:52:13 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-zoo2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:13 [ 694 ] DEBUG : get_kazoo_client: zoo2, ip:172.16.8.3, port:2181, use_ssl:False (cluster.py:3341, get_kazoo_client) 2025-04-03 22:52:13 [ 694 ] INFO : Connecting to 172.16.8.3(172.16.8.3):2181, use_ssl: False (connection.py:650, _connect) 2025-04-03 22:52:13 [ 694 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-04-03 22:52:13 [ 694 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-04-03 22:52:13 [ 694 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-04-03 22:52:13 [ 694 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-04-03 22:52:13 [ 694 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-04-03 22:52:13 [ 694 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-04-03 22:52:13 [ 694 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-04-03 22:52:13 [ 694 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-04-03 22:52:13 [ 694 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. (connection.py:515, zk_loop) 2025-04-03 22:52:13 [ 694 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-04-03 22:52:13 [ 694 ] DEBUG : get_instance_ip instance_name=zoo3 (cluster.py:2082, get_instance_ip) 2025-04-03 22:52:13 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-zoo3-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:13 [ 694 ] DEBUG : get_kazoo_client: zoo3, ip:172.16.8.4, port:2181, use_ssl:False (cluster.py:3341, get_kazoo_client) 2025-04-03 22:52:13 [ 694 ] INFO : Connecting to 172.16.8.4(172.16.8.4):2181, use_ssl: False (connection.py:650, _connect) 2025-04-03 22:52:13 [ 694 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-04-03 22:52:13 [ 694 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-04-03 22:52:13 [ 694 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-04-03 22:52:13 [ 694 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-04-03 22:52:13 [ 694 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-04-03 22:52:13 [ 694 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-04-03 22:52:13 [ 694 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-04-03 22:52:13 [ 694 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-04-03 22:52:14 [ 694 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. (connection.py:515, zk_loop) 2025-04-03 22:52:14 [ 694 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-04-03 22:52:14 [ 694 ] DEBUG : All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') (cluster.py:2482, wait_zookeeper_nodes_to_start) 2025-04-03 22:52:14 [ 694 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env --project-name roottestattachpartitionusingcopy-gw6 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/docker-compose.yml up -d --no-recreate') (cluster.py:3200, start) 2025-04-03 22:52:14 [ 694 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env --project-name roottestattachpartitionusingcopy-gw6 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/docker-compose.yml up -d --no-recreate] (cluster.py:122, run_and_check) 2025-04-03 22:52:14 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Running (cluster.py:148, run_and_check) 2025-04-03 22:52:14 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Running (cluster.py:148, run_and_check) 2025-04-03 22:52:14 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Running (cluster.py:148, run_and_check) 2025-04-03 22:52:14 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Creating (cluster.py:148, run_and_check) 2025-04-03 22:52:14 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Creating (cluster.py:148, run_and_check) 2025-04-03 22:52:14 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Created (cluster.py:148, run_and_check) 2025-04-03 22:52:14 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Created (cluster.py:148, run_and_check) 2025-04-03 22:52:14 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Starting (cluster.py:148, run_and_check) 2025-04-03 22:52:14 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Starting (cluster.py:148, run_and_check) 2025-04-03 22:52:14 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Started (cluster.py:148, run_and_check) 2025-04-03 22:52:14 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Started (cluster.py:148, run_and_check) 2025-04-03 22:52:14 [ 694 ] DEBUG : ClickHouse instance created (cluster.py:3208, start) 2025-04-03 22:52:14 [ 694 ] DEBUG : get_instance_ip instance_name=replica1 (cluster.py:2082, get_instance_ip) 2025-04-03 22:52:14 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-replica1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:14 [ 694 ] DEBUG : get_instance_ip instance_name=replica1 (cluster.py:2092, get_instance_global_ipv6) 2025-04-03 22:52:14 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-replica1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:14 [ 694 ] DEBUG : Waiting for ClickHouse start in replica1, ip: 172.16.8.6... (cluster.py:3216, start) 2025-04-03 22:52:14 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-replica1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:14 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:14 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:14 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:14 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:15 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:15 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:15 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:15 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:15 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:15 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:15 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:15 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:15 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:15 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/2ae35f707f2cd8027f5bdcf522bc991c986cbb9e8215d29459325b643aa06d1e/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : ClickHouse replica1 started (cluster.py:3220, start) 2025-04-03 22:52:16 [ 694 ] DEBUG : get_instance_ip instance_name=replica2 (cluster.py:2082, get_instance_ip) 2025-04-03 22:52:16 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-replica2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : get_instance_ip instance_name=replica2 (cluster.py:2092, get_instance_global_ipv6) 2025-04-03 22:52:16 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-replica2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : Waiting for ClickHouse start in replica2, ip: 172.16.8.5... (cluster.py:3216, start) 2025-04-03 22:52:16 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw6-replica2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/3d09c3de19a069297a3ce38bc20f0020fa32840cffe871ad23df5055b58055a2/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : http://localhost:None "GET /v1.46/containers/3d09c3de19a069297a3ce38bc20f0020fa32840cffe871ad23df5055b58055a2/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-03 22:52:16 [ 694 ] DEBUG : ClickHouse replica2 started (cluster.py:3220, start) ----------------------------- Captured stderr call ----------------------------- Executing query DROP TABLE IF EXISTS source SYNC on replica1 Executing query DROP TABLE IF EXISTS destination SYNC on replica1 Executing query DROP TABLE IF EXISTS source SYNC on replica2 Executing query DROP TABLE IF EXISTS destination SYNC on replica2 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 ~~~~~~~~~~~~~~~~~~~~~ Stack of (140107603838528) ~~~~~~~~~~~~~~~~~~~~~ File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 411, in _perform_spawn reply.run() File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 341, in run self._result = func(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 1160, in _thread_receiver msg = Message.from_io(io) File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 567, in from_io header = io.read(9) # type 1, channel 4, payload 4 File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 534, in read data = self._read(numbytes - len(buf)) ------------------------------ Captured log call ------------------------------- 2025-04-03 22:52:16 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3677, query) 2025-04-03 22:52:17 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3677, query) 2025-04-03 22:52:17 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3677, query) 2025-04-03 22:52:17 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3677, query) 2025-04-03 22:52:18 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 22:53:11 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 22:54:06 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 22:55:01 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 22:55:58 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 22:56:56 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 22:57:51 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 22:58:46 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 22:59:43 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:00:41 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:01:36 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:02:31 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:03:28 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:04:26 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:05:20 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:06:16 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) _____________________________ test_both_mergetree ______________________________ [gw6] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = def test_both_mergetree(start_cluster): cleanup([replica1, replica2]) > create_source_table(replica1, "source", False) test_attach_partition_using_copy/test.py:104: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_attach_partition_using_copy/test.py:40: in create_source_table node.query_with_retry( helpers/cluster.py:3713: in query_with_retry result = self.query( helpers/cluster.py:3678: in query return self.client.query( helpers/client.py:39: in wrap return func(self, *args, **kwargs) helpers/client.py:79: in query ).get_answer() helpers/client.py:230: in get_answer self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) /usr/lib/python3.10/subprocess.py:1209: in wait return self._wait(timeout=timeout) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = timeout = 600 def _wait(self, timeout): """Internal implementation of wait() on POSIX.""" if self.returncode is not None: return self.returncode if timeout is not None: endtime = _time() + timeout # Enter a busy loop if we have a timeout. This busy loop was # cribbed from Lib/threading.py in Thread.wait() at r71065. delay = 0.0005 # 500 us -> initial delay of 1 ms while True: if self._waitpid_lock.acquire(False): try: if self.returncode is not None: break # Another thread waited. (pid, sts) = self._try_wait(os.WNOHANG) assert pid == self.pid or pid == 0 if pid == self.pid: self._handle_exitstatus(sts) break finally: self._waitpid_lock.release() remaining = self._remaining_time(endtime) if remaining <= 0: raise TimeoutExpired(self.args, timeout) delay = min(delay * 2, remaining, .05) > time.sleep(delay) E Failed: Timeout >900.0s /usr/lib/python3.10/subprocess.py:1953: Failed ----------------------------- Captured stderr call ----------------------------- Executing query DROP TABLE IF EXISTS source SYNC on replica1 Executing query DROP TABLE IF EXISTS destination SYNC on replica1 Executing query DROP TABLE IF EXISTS source SYNC on replica2 Executing query DROP TABLE IF EXISTS destination SYNC on replica2 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 ~~~~~~~~~~~~~~~~~~~~~ Stack of (140107603838528) ~~~~~~~~~~~~~~~~~~~~~ File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 411, in _perform_spawn reply.run() File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 341, in run self._result = func(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 1160, in _thread_receiver msg = Message.from_io(io) File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 567, in from_io header = io.read(9) # type 1, channel 4, payload 4 File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 534, in read data = self._read(numbytes - len(buf)) ------------------------------ Captured log call ------------------------------- 2025-04-03 23:06:56 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3677, query) 2025-04-03 23:07:12 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3677, query) 2025-04-03 23:07:12 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3677, query) 2025-04-03 23:07:12 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3677, query) 2025-04-03 23:07:13 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:08:11 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:09:05 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:10:00 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:10:57 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:11:55 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:12:50 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:13:45 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:14:42 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:15:40 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:16:34 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:17:30 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:18:27 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:19:24 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:20:19 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:21:14 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) _______________________ test_not_work_on_different_disk ________________________ [gw6] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = def test_not_work_on_different_disk(start_cluster): cleanup([replica1, replica2]) # Replace and move should not work on replace > create_source_table(replica1, "source", False) test_attach_partition_using_copy/test.py:197: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_attach_partition_using_copy/test.py:40: in create_source_table node.query_with_retry( helpers/cluster.py:3713: in query_with_retry result = self.query( helpers/cluster.py:3678: in query return self.client.query( helpers/client.py:39: in wrap return func(self, *args, **kwargs) helpers/client.py:79: in query ).get_answer() helpers/client.py:230: in get_answer self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) /usr/lib/python3.10/subprocess.py:1209: in wait return self._wait(timeout=timeout) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = timeout = 600 def _wait(self, timeout): """Internal implementation of wait() on POSIX.""" if self.returncode is not None: return self.returncode if timeout is not None: endtime = _time() + timeout # Enter a busy loop if we have a timeout. This busy loop was # cribbed from Lib/threading.py in Thread.wait() at r71065. delay = 0.0005 # 500 us -> initial delay of 1 ms while True: if self._waitpid_lock.acquire(False): try: if self.returncode is not None: break # Another thread waited. (pid, sts) = self._try_wait(os.WNOHANG) assert pid == self.pid or pid == 0 if pid == self.pid: self._handle_exitstatus(sts) break finally: self._waitpid_lock.release() remaining = self._remaining_time(endtime) if remaining <= 0: raise TimeoutExpired(self.args, timeout) delay = min(delay * 2, remaining, .05) > time.sleep(delay) E Failed: Timeout >900.0s /usr/lib/python3.10/subprocess.py:1953: Failed ----------------------------- Captured stderr call ----------------------------- Executing query DROP TABLE IF EXISTS source SYNC on replica1 Executing query DROP TABLE IF EXISTS destination SYNC on replica1 Executing query DROP TABLE IF EXISTS source SYNC on replica2 Executing query DROP TABLE IF EXISTS destination SYNC on replica2 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 ~~~~~~~~~~~~~~~~~~~~~ Stack of (140107603838528) ~~~~~~~~~~~~~~~~~~~~~ File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 411, in _perform_spawn reply.run() File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 341, in run self._result = func(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 1160, in _thread_receiver msg = Message.from_io(io) File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 567, in from_io header = io.read(9) # type 1, channel 4, payload 4 File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 534, in read data = self._read(numbytes - len(buf)) ------------------------------ Captured log call ------------------------------- 2025-04-03 23:21:56 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3677, query) 2025-04-03 23:22:11 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3677, query) 2025-04-03 23:22:11 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3677, query) 2025-04-03 23:22:11 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3677, query) 2025-04-03 23:22:12 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:23:09 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:24:04 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:24:59 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:25:56 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:26:54 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:27:49 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:28:44 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:29:41 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:30:39 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:31:33 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:32:29 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:33:26 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:34:23 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:35:18 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:36:13 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) _______________________ test_only_destination_replicated _______________________ [gw6] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = def test_only_destination_replicated(start_cluster): cleanup([replica1, replica2]) > create_source_table(replica1, "source", False) test_attach_partition_using_copy/test.py:161: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_attach_partition_using_copy/test.py:40: in create_source_table node.query_with_retry( helpers/cluster.py:3713: in query_with_retry result = self.query( helpers/cluster.py:3678: in query return self.client.query( helpers/client.py:39: in wrap return func(self, *args, **kwargs) helpers/client.py:79: in query ).get_answer() helpers/client.py:230: in get_answer self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) /usr/lib/python3.10/subprocess.py:1209: in wait return self._wait(timeout=timeout) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = timeout = 600 def _wait(self, timeout): """Internal implementation of wait() on POSIX.""" if self.returncode is not None: return self.returncode if timeout is not None: endtime = _time() + timeout # Enter a busy loop if we have a timeout. This busy loop was # cribbed from Lib/threading.py in Thread.wait() at r71065. delay = 0.0005 # 500 us -> initial delay of 1 ms while True: if self._waitpid_lock.acquire(False): try: if self.returncode is not None: break # Another thread waited. (pid, sts) = self._try_wait(os.WNOHANG) assert pid == self.pid or pid == 0 if pid == self.pid: self._handle_exitstatus(sts) break finally: self._waitpid_lock.release() remaining = self._remaining_time(endtime) if remaining <= 0: raise TimeoutExpired(self.args, timeout) delay = min(delay * 2, remaining, .05) > time.sleep(delay) E Failed: Timeout >900.0s /usr/lib/python3.10/subprocess.py:1953: Failed ----------------------------- Captured stderr call ----------------------------- Executing query DROP TABLE IF EXISTS source SYNC on replica1 Executing query DROP TABLE IF EXISTS destination SYNC on replica1 Executing query DROP TABLE IF EXISTS source SYNC on replica2 Executing query DROP TABLE IF EXISTS destination SYNC on replica2 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 ~~~~~~~~~~~~~~~~~~~~~ Stack of (140107603838528) ~~~~~~~~~~~~~~~~~~~~~ File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 411, in _perform_spawn reply.run() File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 341, in run self._result = func(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 1160, in _thread_receiver msg = Message.from_io(io) File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 567, in from_io header = io.read(9) # type 1, channel 4, payload 4 File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 534, in read data = self._read(numbytes - len(buf)) ------------------------------ Captured log call ------------------------------- 2025-04-03 23:36:56 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3677, query) 2025-04-03 23:37:10 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3677, query) 2025-04-03 23:37:10 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3677, query) 2025-04-03 23:37:10 [ 694 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3677, query) 2025-04-03 23:37:10 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:38:08 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:39:03 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:39:58 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:40:55 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:41:53 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:42:47 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:43:43 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:44:40 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:45:37 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:46:32 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:47:27 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:48:24 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:49:22 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:50:17 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) 2025-04-03 23:51:12 [ 694 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3677, query) --------------------------- Captured stderr teardown --------------------------- Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env --project-name roottestattachpartitionusingcopy-gw6 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/docker-compose.yml stop --timeout 20] Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env --project-name roottestattachpartitionusingcopy-gw6 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/docker-compose.yml down --volumes] Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Removing Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Removing Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Removed Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Removed Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Stopping Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Removing Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Removing Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Stopped Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Removing Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Removed Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Removed Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Removed Stderr: Network roottestattachpartitionusingcopy-gw6_default Removing Stderr: Network roottestattachpartitionusingcopy-gw6_default Removed Cleanup called Docker networks for project roottestattachpartitionusingcopy-gw6 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestattachpartitionusingcopy-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestattachpartitionusingcopy-gw6 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestattachpartitionusingcopy-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestattachpartitionusingcopy-gw6 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 ---------------------------- Captured log teardown ----------------------------- 2025-04-03 23:51:56 [ 694 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env --project-name roottestattachpartitionusingcopy-gw6 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/docker-compose.yml stop --timeout 20] (cluster.py:122, run_and_check) 2025-04-03 23:52:04 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Stopping (cluster.py:148, run_and_check) 2025-04-03 23:52:04 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Stopping (cluster.py:148, run_and_check) 2025-04-03 23:52:04 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Stopped (cluster.py:148, run_and_check) 2025-04-03 23:52:04 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Stopped (cluster.py:148, run_and_check) 2025-04-03 23:52:04 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Stopping (cluster.py:148, run_and_check) 2025-04-03 23:52:04 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Stopping (cluster.py:148, run_and_check) 2025-04-03 23:52:04 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Stopping (cluster.py:148, run_and_check) 2025-04-03 23:52:04 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Stopped (cluster.py:148, run_and_check) 2025-04-03 23:52:04 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Stopped (cluster.py:148, run_and_check) 2025-04-03 23:52:04 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Stopped (cluster.py:148, run_and_check) 2025-04-03 23:52:04 [ 694 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:122, run_and_check) 2025-04-03 23:52:04 [ 694 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:122, run_and_check) 2025-04-03 23:52:04 [ 694 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/.env --project-name roottestattachpartitionusingcopy-gw6 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw6/replica2/docker-compose.yml down --volumes] (cluster.py:122, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Stopping (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Stopping (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Stopped (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Removing (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Stopped (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Removing (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica1-1 Removed (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-replica2-1 Removed (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Stopping (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Stopping (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Stopping (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Stopped (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Removing (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Stopped (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Removing (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Stopped (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Removing (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo1-1 Removed (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo2-1 Removed (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw6-zoo3-1 Removed (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw6_default Removing (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw6_default Removed (cluster.py:148, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Cleanup called (cluster.py:894, cleanup) 2025-04-03 23:52:05 [ 694 ] DEBUG : Docker networks for project roottestattachpartitionusingcopy-gw6 are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces) 2025-04-03 23:52:05 [ 694 ] DEBUG : Docker containers for project roottestattachpartitionusingcopy-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces) 2025-04-03 23:52:05 [ 694 ] DEBUG : Docker volumes for project roottestattachpartitionusingcopy-gw6 are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces) 2025-04-03 23:52:05 [ 694 ] DEBUG : Command:[docker container list --all --filter name='^/roottestattachpartitionusingcopy-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:122, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Unstopped containers: {} (cluster.py:908, cleanup) 2025-04-03 23:52:05 [ 694 ] DEBUG : No running containers for project: roottestattachpartitionusingcopy-gw6 (cluster.py:922, cleanup) 2025-04-03 23:52:05 [ 694 ] DEBUG : Trying to prune unused networks... (cluster.py:928, cleanup) 2025-04-03 23:52:05 [ 694 ] DEBUG : Trying to prune unused images... (cluster.py:944, cleanup) 2025-04-03 23:52:05 [ 694 ] DEBUG : Command:[docker image prune -f] (cluster.py:122, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:146, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Images pruned (cluster.py:947, cleanup) 2025-04-03 23:52:05 [ 694 ] DEBUG : Trying to prune unused volumes... (cluster.py:953, cleanup) 2025-04-03 23:52:05 [ 694 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:122, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Stdout:1 (cluster.py:146, run_and_check) 2025-04-03 23:52:05 [ 694 ] DEBUG : Volumes pruned: 1 (cluster.py:958, cleanup) ============================== slowest durations =============================== 900.00s call test_attach_partition_using_copy/test.py::test_both_mergetree 900.00s call test_attach_partition_using_copy/test.py::test_not_work_on_different_disk 900.00s call test_attach_partition_using_copy/test.py::test_only_destination_replicated 879.23s call test_attach_partition_using_copy/test.py::test_all_replicated 197.87s call test_async_insert_memory/test.py::test_memory_usage 175.68s setup test_azure_blob_storage_zero_copy_replication/test.py::test_zero_copy_replication 86.92s call test_ddl_worker_replicas/test.py::test_ddl_worker_replicas 52.11s call test_dictionaries_dependency/test.py::test_no_lazy_load 46.79s call test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_simple[cache] 44.31s call test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[cache] 44.16s call test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_direct] 43.59s call test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[direct] 41.74s call test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_complex[complex_key_cache] 40.68s call test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[flat] 40.60s call test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_hashed] 40.26s call test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[hashed] 39.91s call test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_cache] 24.39s setup test_cluster_all_replicas/test.py::test_cluster 22.24s call test_config_xml_main/test.py::test_xml_main_conf 22.22s call test_disk_access_storage/test.py::test_alter 21.94s teardown test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation 21.69s call test_concurrent_queries_restriction_by_query_kind/test.py::test_select 20.78s setup test_attach_partition_using_copy/test.py::test_all_replicated 19.64s setup test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-False] 19.40s call test_config_yaml_main/test.py::test_yaml_main_conf 19.13s setup test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field 19.13s teardown test_azure_blob_storage_zero_copy_replication/test.py::test_zero_copy_replication 18.76s setup test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node0] 18.38s setup test_compression_nested_columns/test.py::test_nested_compression_codec 18.27s call test_backup_restore_on_cluster/test.py::test_shutdown_waits_for_backup 17.75s call test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node0] 17.52s setup test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_cache] 17.52s call test_dictionaries_all_layouts_separate_sources/test_http.py::test_ranged[range_hashed] 17.33s call test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node1] 17.22s call test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node0] 17.02s setup test_ddl_worker_replicas/test.py::test_ddl_worker_replicas 16.99s call test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node1] 16.33s setup test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_integers_key_hashed] 16.29s setup test_backup_restore/test.py::test_attach_partition 15.91s setup test_accept_invalid_certificate/test.py::test_accept 15.50s setup test_always_fetch_merged/test.py::test_replica_always_download 15.25s call test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node1] 15.06s setup test_default_role/test.py::test_alter_user 14.80s setup test_disk_access_storage/test.py::test_alter 14.32s call test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node0] 14.09s teardown test_ddl_config_hostname/test.py::test_ddl_queue_delete_add_replica 13.61s call test_config_decryption/test_wrong_settings.py::test_invalid_chars 13.49s call test_config_decryption/test_wrong_settings.py::test_wrong_method 13.40s call test_config_decryption/test_wrong_settings.py::test_subnodes 13.03s call test_config_decryption/test_wrong_settings.py::test_no_encryption_key 12.57s setup test_create_query_constraints/test.py::test_create_query_const_constraints 12.56s setup test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation 12.22s setup test_custom_settings/test.py::test_custom_settings 12.16s call test_always_fetch_merged/test.py::test_replica_always_download 11.70s call test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node1] 11.25s setup test_ddl_config_hostname/test.py::test_ddl_queue_delete_add_replica 10.92s call test_concurrent_queries_restriction_by_query_kind/test.py::test_insert 10.56s setup test_async_insert_memory/test.py::test_memory_usage 9.78s teardown test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes] 9.49s call test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node0] 9.05s call test_backup_restore_on_cluster/test.py::test_replicated_database_async 8.85s teardown test_backup_restore_on_cluster/test.py::test_tables_dependency 8.51s teardown test_attach_partition_using_copy/test.py::test_only_destination_replicated 8.25s call test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes] 7.68s call test_backup_restore/test.py::test_attach_partition 7.58s call test_backup_restore_on_cluster/test.py::test_tables_dependency 7.53s call test_backup_restore/test.py::test_replace_partition 7.50s teardown test_dictionaries_dependency/test.py::test_no_lazy_load 7.26s call test_backup_restore_on_cluster/test.py::test_required_privileges 6.83s call test_backup_restore/test.py::test_restore 6.81s call test_backup_restore_on_cluster/test.py::test_replicated_database_compare_parts 6.74s call test_backup_restore_on_cluster/test.py::test_replicated_database 6.61s teardown test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node0] 6.53s call test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes] 6.51s teardown test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node1] 6.47s call test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes] 6.45s call test_backup_restore_on_cluster/test.py::test_keeper_value_max_size 6.44s teardown test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node1] 6.42s teardown test_always_fetch_merged/test.py::test_replica_always_download 6.32s teardown test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node1] 6.26s setup test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_complex[complex_key_cache] 6.24s call test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes] 6.13s teardown test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node0] 6.08s call test_backup_restore_on_cluster/test.py::test_mutation 6.04s setup test_concurrent_queries_restriction_by_query_kind/test.py::test_insert 5.99s teardown test_ddl_worker_replicas/test.py::test_ddl_worker_replicas 5.92s teardown test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node0] 5.81s call test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field 5.72s call test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_insert 5.66s teardown test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node0] 5.60s teardown test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node1] 5.40s call test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_def 5.26s call test_backup_restore_on_cluster/test.py::test_table_in_replicated_database_with_not_synced_def 5.20s call test_ddl_config_hostname/test.py::test_ddl_queue_delete_add_replica 5.11s call test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[uuid] 5.10s teardown test_compression_nested_columns/test.py::test_nested_compression_codec 4.99s call test_backup_restore_on_cluster/test.py::test_replicated_table 4.96s call test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[database] 4.96s teardown test_concurrent_queries_restriction_by_query_kind/test.py::test_select 4.95s teardown test_accept_invalid_certificate/test.py::test_strict_reject_with_config 4.89s teardown test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_mixed_key_cache] 4.78s teardown test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[hashed] 4.73s call test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation 4.71s call test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_merge 4.63s call test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[native-True] 4.61s call test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-True] 4.54s teardown test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field 4.53s teardown test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_simple[cache] 4.43s call test_backup_restore_on_cluster/test.py::test_system_users 4.42s call test_backup_restore_on_cluster/test.py::test_backup_restore_on_single_replica 4.34s teardown test_create_query_constraints/test.py::test_create_query_minmax_constraints 4.34s call test_backup_restore_on_cluster/test.py::test_system_functions 4.23s call test_backup_restore_on_cluster/test.py::test_projection 4.18s teardown test_default_role/test.py::test_wrong_set_default_role 4.09s call test_backup_restore_on_cluster/test.py::test_file_deduplication 4.04s call test_backup_restore_on_cluster/test.py::test_replicated_table_with_uuid_in_zkpath 3.85s call test_backup_restore_on_cluster/test.py::test_table_with_parts_in_queue_considered_non_empty 3.84s teardown test_async_insert_memory/test.py::test_memory_usage 3.71s call test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_bigger_cluster 3.71s teardown test_backup_restore/test.py::test_restore 3.58s call test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-False] 3.42s call test_default_role/test.py::test_set_default_roles 3.41s call test_azure_blob_storage_zero_copy_replication/test.py::test_zero_copy_replication 3.29s call test_create_query_constraints/test.py::test_create_query_minmax_constraints 3.19s call test_compression_nested_columns/test.py::test_nested_compression_codec 3.17s call test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_mixed_key_cache] 3.12s call test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_cache] 3.09s call test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_integers_key_cache] 3.05s teardown test_custom_settings/test.py::test_illformed_setting 2.86s call test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_smaller_cluster 2.85s call test_backup_restore_on_cluster/test.py::test_different_tables_on_nodes 2.77s call test_default_role/test.py::test_alter_user 2.76s call test_backup_restore_on_cluster/test.py::test_empty_replicated_table 2.61s call test_create_query_constraints/test.py::test_create_query_const_constraints 2.48s teardown test_backup_restore_on_cluster/test.py::test_mutation 2.35s teardown test_backup_restore_on_cluster/test.py::test_replicated_database_async 2.29s teardown test_backup_restore_on_cluster/test.py::test_replicated_database 2.23s teardown test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[database] 2.08s teardown test_backup_restore_on_cluster/test.py::test_empty_replicated_table 2.03s teardown test_backup_restore_on_cluster/test.py::test_projection 2.03s teardown test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_insert 1.98s teardown test_backup_restore_on_cluster/test.py::test_backup_restore_on_single_replica 1.93s teardown test_backup_restore_on_cluster/test.py::test_get_error_from_other_host 1.93s teardown test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[uuid] 1.90s teardown test_backup_restore_on_cluster/test.py::test_keeper_value_max_size 1.89s teardown test_backup_restore_on_cluster/test.py::test_required_privileges 1.89s teardown test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_smaller_cluster 1.88s teardown test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_merge 1.84s teardown test_backup_restore_on_cluster/test.py::test_replicated_table_with_uuid_in_zkpath 1.84s teardown test_backup_restore_on_cluster/test.py::test_file_deduplication 1.84s teardown test_backup_restore_on_cluster/test.py::test_replicated_database_compare_parts 1.83s teardown test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_bigger_cluster 1.83s teardown test_backup_restore_on_cluster/test.py::test_table_with_parts_in_queue_considered_non_empty 1.83s teardown test_backup_restore_on_cluster/test.py::test_table_in_replicated_database_with_not_synced_def 1.79s teardown test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_def 1.78s teardown test_backup_restore_on_cluster/test.py::test_shutdown_waits_for_backup 1.78s teardown test_backup_restore_on_cluster/test.py::test_replicated_table 1.78s teardown test_backup_restore_on_cluster/test.py::test_system_functions 1.73s teardown test_backup_restore_on_cluster/test.py::test_system_users 1.73s teardown test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-True] 1.69s call test_custom_settings/test.py::test_custom_settings 1.63s teardown test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-False] 1.63s teardown test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[native-True] 1.63s teardown test_backup_restore_on_cluster/test.py::test_different_tables_on_nodes 1.42s teardown test_disk_access_storage/test.py::test_alter 1.34s call test_default_role/test.py::test_wrong_set_default_role 1.30s call test_backup_restore_on_cluster/test.py::test_get_error_from_other_host 1.21s call test_cluster_all_replicas/test.py::test_cluster 0.90s call test_dictionaries_select_all/test.py::test_select_all[clickhouse_flat] 0.85s call test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_mixed_key_hashed] 0.80s call test_cluster_all_replicas/test.py::test_global_in 0.75s call test_dictionaries_select_all/test.py::test_select_all[clickhouse_hashed] 0.75s call test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_integers_key_hashed] 0.70s call test_dictionaries_select_all/test.py::test_select_all[clickhouse_range_hashed] 0.53s call test_custom_settings/test.py::test_illformed_setting 0.47s setup test_default_role/test.py::test_wrong_set_default_role 0.32s call test_accept_invalid_certificate/test.py::test_accept 0.32s call test_accept_invalid_certificate/test.py::test_connection_accept 0.27s call test_accept_invalid_certificate/test.py::test_strict_reject_with_config 0.27s call test_accept_invalid_certificate/test.py::test_strict_reject 0.22s call test_accept_invalid_certificate/test.py::test_strict_connection_reject 0.22s setup test_default_role/test.py::test_set_default_roles 0.17s call test_accept_invalid_certificate/test.py::test_default 0.04s setup test_config_decryption/test_wrong_settings.py::test_invalid_chars 0.00s setup test_backup_restore_on_cluster/test.py::test_mutation 0.00s setup test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes] 0.00s setup test_backup_restore_on_cluster/test.py::test_file_deduplication 0.00s setup test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-True] 0.00s setup test_backup_restore_on_cluster/test.py::test_system_functions 0.00s setup test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_insert 0.00s setup test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[native-True] 0.00s setup test_backup_restore_on_cluster/test.py::test_system_users 0.00s setup test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_simple[cache] 0.00s setup test_config_yaml_main/test.py::test_yaml_main_conf 0.00s setup test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node1] 0.00s setup test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_cache] 0.00s teardown test_config_yaml_main/test.py::test_yaml_main_conf 0.00s setup test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node0] 0.00s setup test_backup_restore_on_cluster/test.py::test_projection 0.00s setup test_dictionaries_dependency/test.py::test_no_lazy_load 0.00s setup test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_merge 0.00s setup test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node0] 0.00s setup test_config_xml_main/test.py::test_xml_main_conf 0.00s setup test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node1] 0.00s setup test_backup_restore_on_cluster/test.py::test_replicated_table 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_complex[complex_key_cache] 0.00s setup test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node1] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_cache] 0.00s setup test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[uuid] 0.00s setup test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node1] 0.00s setup test_backup_restore/test.py::test_replace_partition 0.00s setup test_config_decryption/test_wrong_settings.py::test_wrong_method 0.00s setup test_backup_restore_on_cluster/test.py::test_replicated_database 0.00s setup test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_mixed_key_cache] 0.00s teardown test_accept_invalid_certificate/test.py::test_strict_connection_reject 0.00s setup test_backup_restore_on_cluster/test.py::test_get_error_from_other_host 0.00s setup test_backup_restore_on_cluster/test.py::test_table_in_replicated_database_with_not_synced_def 0.00s setup test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node0] 0.00s setup test_backup_restore_on_cluster/test.py::test_replicated_database_async 0.00s setup test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[cache] 0.00s setup test_backup_restore_on_cluster/test.py::test_empty_replicated_table 0.00s setup test_concurrent_queries_restriction_by_query_kind/test.py::test_select 0.00s setup test_backup_restore_on_cluster/test.py::test_different_tables_on_nodes 0.00s setup test_create_query_constraints/test.py::test_create_query_minmax_constraints 0.00s setup test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_def 0.00s setup test_backup_restore/test.py::test_restore 0.00s setup test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes] 0.00s setup test_backup_restore_on_cluster/test.py::test_keeper_value_max_size 0.00s setup test_accept_invalid_certificate/test.py::test_strict_reject 0.00s setup test_backup_restore_on_cluster/test.py::test_tables_dependency 0.00s setup test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[hashed] 0.00s setup test_backup_restore_on_cluster/test.py::test_replicated_table_with_uuid_in_zkpath 0.00s setup test_accept_invalid_certificate/test.py::test_strict_reject_with_config 0.00s setup test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_integers_key_cache] 0.00s teardown test_default_role/test.py::test_alter_user 0.00s teardown test_config_xml_main/test.py::test_xml_main_conf 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[flat] 0.00s setup test_dictionaries_select_all/test.py::test_select_all[clickhouse_flat] 0.00s setup test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes] 0.00s setup test_backup_restore_on_cluster/test.py::test_backup_restore_on_single_replica 0.00s teardown test_attach_partition_using_copy/test.py::test_both_mergetree 0.00s setup test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_direct] 0.00s teardown test_cluster_all_replicas/test.py::test_cluster 0.00s teardown test_attach_partition_using_copy/test.py::test_all_replicated 0.00s setup test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[database] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[direct] 0.00s setup test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_smaller_cluster 0.00s setup test_backup_restore_on_cluster/test.py::test_table_with_parts_in_queue_considered_non_empty 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[cache] 0.00s setup test_backup_restore_on_cluster/test.py::test_required_privileges 0.00s teardown test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes] 0.00s setup test_dictionaries_select_all/test.py::test_select_all[clickhouse_hashed] 0.00s setup test_custom_settings/test.py::test_illformed_setting 0.00s teardown test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_integers_key_hashed] 0.00s teardown test_config_decryption/test_wrong_settings.py::test_wrong_method 0.00s setup test_attach_partition_using_copy/test.py::test_not_work_on_different_disk 0.00s teardown test_create_query_constraints/test.py::test_create_query_const_constraints 0.00s setup test_cluster_all_replicas/test.py::test_global_in 0.00s setup test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_hashed] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_http.py::test_ranged[range_hashed] 0.00s teardown test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_cache] 0.00s teardown test_config_decryption/test_wrong_settings.py::test_no_encryption_key 0.00s setup test_backup_restore_on_cluster/test.py::test_shutdown_waits_for_backup 0.00s teardown test_default_role/test.py::test_set_default_roles 0.00s setup test_config_decryption/test_wrong_settings.py::test_subnodes 0.00s teardown test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes] 0.00s teardown test_concurrent_queries_restriction_by_query_kind/test.py::test_insert 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_hashed] 0.00s setup test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_bigger_cluster 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_http.py::test_ranged[range_hashed] 0.00s setup test_accept_invalid_certificate/test.py::test_connection_accept 0.00s setup test_backup_restore_on_cluster/test.py::test_replicated_database_compare_parts 0.00s setup test_dictionaries_select_all/test.py::test_select_all[clickhouse_range_hashed] 0.00s setup test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_mixed_key_hashed] 0.00s teardown test_config_decryption/test_wrong_settings.py::test_subnodes 0.00s setup test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[flat] 0.00s teardown test_dictionaries_select_all/test.py::test_select_all[clickhouse_range_hashed] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_direct] 0.00s teardown test_accept_invalid_certificate/test.py::test_strict_reject 0.00s teardown test_custom_settings/test.py::test_custom_settings 0.00s teardown test_backup_restore/test.py::test_replace_partition 0.00s teardown test_config_decryption/test_wrong_settings.py::test_invalid_chars 0.00s teardown test_attach_partition_using_copy/test.py::test_not_work_on_different_disk 0.00s teardown test_accept_invalid_certificate/test.py::test_accept 0.00s teardown test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_integers_key_cache] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[direct] 0.00s teardown test_backup_restore/test.py::test_attach_partition 0.00s setup test_config_decryption/test_wrong_settings.py::test_no_encryption_key 0.00s setup test_accept_invalid_certificate/test.py::test_default 0.00s teardown test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_mixed_key_hashed] 0.00s teardown test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes] 0.00s teardown test_accept_invalid_certificate/test.py::test_connection_accept 0.00s setup test_accept_invalid_certificate/test.py::test_strict_connection_reject 0.00s setup test_attach_partition_using_copy/test.py::test_both_mergetree 0.00s setup test_attach_partition_using_copy/test.py::test_only_destination_replicated 0.00s teardown test_dictionaries_select_all/test.py::test_select_all[clickhouse_hashed] 0.00s teardown test_dictionaries_select_all/test.py::test_select_all[clickhouse_flat] 0.00s teardown test_cluster_all_replicas/test.py::test_global_in 0.00s teardown test_accept_invalid_certificate/test.py::test_default =========================== short test summary info ============================ FAILED test_attach_partition_using_copy/test.py::test_all_replicated - Failed... FAILED test_attach_partition_using_copy/test.py::test_both_mergetree - Failed... FAILED test_attach_partition_using_copy/test.py::test_not_work_on_different_disk FAILED test_attach_partition_using_copy/test.py::test_only_destination_replicated PASSED test_config_decryption/test_wrong_settings.py::test_invalid_chars PASSED test_accept_invalid_certificate/test.py::test_accept PASSED test_accept_invalid_certificate/test.py::test_connection_accept PASSED test_accept_invalid_certificate/test.py::test_default PASSED test_accept_invalid_certificate/test.py::test_strict_connection_reject PASSED test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_integers_key_hashed] PASSED test_accept_invalid_certificate/test.py::test_strict_reject PASSED test_accept_invalid_certificate/test.py::test_strict_reject_with_config PASSED test_default_role/test.py::test_alter_user PASSED test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_mixed_key_hashed] PASSED test_dictionaries_select_all/test.py::test_select_all[clickhouse_flat] PASSED test_dictionaries_select_all/test.py::test_select_all[clickhouse_hashed] PASSED test_dictionaries_select_all/test.py::test_select_all[clickhouse_range_hashed] PASSED test_default_role/test.py::test_set_default_roles PASSED test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-False] PASSED test_default_role/test.py::test_wrong_set_default_role PASSED test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_cache] PASSED test_backup_restore/test.py::test_attach_partition PASSED test_cluster_all_replicas/test.py::test_cluster PASSED test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_integers_key_cache] PASSED test_config_decryption/test_wrong_settings.py::test_no_encryption_key PASSED test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-True] PASSED test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_mixed_key_cache] PASSED test_backup_restore/test.py::test_replace_partition PASSED test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes] PASSED test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[native-True] PASSED test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node0] PASSED test_backup_restore/test.py::test_restore PASSED test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes] PASSED test_concurrent_queries_restriction_by_query_kind/test.py::test_insert PASSED test_cluster_all_replicas/test.py::test_global_in PASSED test_config_decryption/test_wrong_settings.py::test_subnodes PASSED test_backup_restore_on_cluster/test.py::test_backup_restore_on_single_replica PASSED test_create_query_constraints/test.py::test_create_query_const_constraints PASSED test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes] PASSED test_create_query_constraints/test.py::test_create_query_minmax_constraints PASSED test_backup_restore_on_cluster/test.py::test_different_tables_on_nodes PASSED test_backup_restore_on_cluster/test.py::test_empty_replicated_table PASSED test_config_decryption/test_wrong_settings.py::test_wrong_method PASSED test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes] PASSED test_backup_restore_on_cluster/test.py::test_file_deduplication PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_cache] PASSED test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node1] PASSED test_backup_restore_on_cluster/test.py::test_get_error_from_other_host PASSED test_concurrent_queries_restriction_by_query_kind/test.py::test_select PASSED test_backup_restore_on_cluster/test.py::test_keeper_value_max_size PASSED test_always_fetch_merged/test.py::test_replica_always_download PASSED test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field PASSED test_backup_restore_on_cluster/test.py::test_mutation PASSED test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node0] PASSED test_custom_settings/test.py::test_custom_settings PASSED test_custom_settings/test.py::test_illformed_setting PASSED test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_complex[complex_key_cache] PASSED test_backup_restore_on_cluster/test.py::test_projection PASSED test_config_xml_main/test.py::test_xml_main_conf PASSED test_backup_restore_on_cluster/test.py::test_replicated_database PASSED test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node1] PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_direct] PASSED test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation PASSED test_backup_restore_on_cluster/test.py::test_replicated_database_async PASSED test_backup_restore_on_cluster/test.py::test_replicated_database_compare_parts PASSED test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node0] PASSED test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[database] PASSED test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[uuid] PASSED test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_simple[cache] PASSED test_backup_restore_on_cluster/test.py::test_replicated_table PASSED test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node1] PASSED test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_bigger_cluster PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_hashed] PASSED test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_smaller_cluster PASSED test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_def PASSED test_compression_nested_columns/test.py::test_nested_compression_codec PASSED test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node0] PASSED test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_insert PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_ranged[range_hashed] PASSED test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_merge PASSED test_backup_restore_on_cluster/test.py::test_replicated_table_with_uuid_in_zkpath PASSED test_backup_restore_on_cluster/test.py::test_required_privileges PASSED test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node1] PASSED test_ddl_worker_replicas/test.py::test_ddl_worker_replicas PASSED test_backup_restore_on_cluster/test.py::test_shutdown_waits_for_backup PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[cache] PASSED test_backup_restore_on_cluster/test.py::test_system_functions PASSED test_backup_restore_on_cluster/test.py::test_system_users PASSED test_backup_restore_on_cluster/test.py::test_table_in_replicated_database_with_not_synced_def PASSED test_backup_restore_on_cluster/test.py::test_table_with_parts_in_queue_considered_non_empty PASSED test_disk_access_storage/test.py::test_alter PASSED test_backup_restore_on_cluster/test.py::test_tables_dependency PASSED test_dictionaries_dependency/test.py::test_no_lazy_load PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[direct] PASSED test_azure_blob_storage_zero_copy_replication/test.py::test_zero_copy_replication PASSED test_async_insert_memory/test.py::test_memory_usage PASSED test_config_yaml_main/test.py::test_yaml_main_conf PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[flat] PASSED test_ddl_config_hostname/test.py::test_ddl_queue_delete_add_replica PASSED test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[hashed] ================== 4 failed, 96 passed in 3611.27s (1:00:11) =================== Traceback (most recent call last): File "/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration/./runner", line 528, in subprocess.check_call(cmd, shell=True) File "/usr/lib/python3.10/subprocess.py", line 369, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_bayf19 --privileged --dns-search='.' --memory=30709030912 --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=6712d5cc610d -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=caad4729259e -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 test_accept_invalid_certificate/test.py::test_accept test_accept_invalid_certificate/test.py::test_connection_accept test_accept_invalid_certificate/test.py::test_default test_accept_invalid_certificate/test.py::test_strict_connection_reject test_accept_invalid_certificate/test.py::test_strict_reject test_accept_invalid_certificate/test.py::test_strict_reject_with_config test_always_fetch_merged/test.py::test_replica_always_download test_async_insert_memory/test.py::test_memory_usage test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field test_attach_partition_using_copy/test.py::test_all_replicated test_attach_partition_using_copy/test.py::test_both_mergetree test_attach_partition_using_copy/test.py::test_not_work_on_different_disk test_attach_partition_using_copy/test.py::test_only_destination_replicated test_azure_blob_storage_zero_copy_replication/test.py::test_zero_copy_replication test_backup_restore/test.py::test_attach_partition test_backup_restore/test.py::test_replace_partition test_backup_restore/test.py::test_restore 'test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-False]' 'test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[http-True]' 'test_backup_restore_on_cluster/test.py::test_async_backups_to_same_destination[native-True]' test_backup_restore_on_cluster/test.py::test_backup_restore_on_single_replica test_backup_restore_on_cluster/test.py::test_different_tables_on_nodes test_backup_restore_on_cluster/test.py::test_empty_replicated_table test_backup_restore_on_cluster/test.py::test_file_deduplication test_backup_restore_on_cluster/test.py::test_get_error_from_other_host test_backup_restore_on_cluster/test.py::test_keeper_value_max_size test_backup_restore_on_cluster/test.py::test_mutation test_backup_restore_on_cluster/test.py::test_projection test_backup_restore_on_cluster/test.py::test_replicated_database test_backup_restore_on_cluster/test.py::test_replicated_database_async test_backup_restore_on_cluster/test.py::test_replicated_database_compare_parts 'test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[database]' 'test_backup_restore_on_cluster/test.py::test_replicated_database_with_special_macro_in_zk_path[uuid]' test_backup_restore_on_cluster/test.py::test_replicated_table test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_bigger_cluster test_backup_restore_on_cluster/test.py::test_replicated_table_restored_into_smaller_cluster test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_def test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_insert test_backup_restore_on_cluster/test.py::test_replicated_table_with_not_synced_merge test_backup_restore_on_cluster/test.py::test_replicated_table_with_uuid_in_zkpath test_backup_restore_on_cluster/test.py::test_required_privileges test_backup_restore_on_cluster/test.py::test_shutdown_waits_for_backup test_backup_restore_on_cluster/test.py::test_system_functions test_backup_restore_on_cluster/test.py::test_system_users test_backup_restore_on_cluster/test.py::test_table_in_replicated_database_with_not_synced_def test_backup_restore_on_cluster/test.py::test_table_with_parts_in_queue_considered_non_empty test_backup_restore_on_cluster/test.py::test_tables_dependency test_cluster_all_replicas/test.py::test_cluster 'test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes]' 'test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes]' test_cluster_all_replicas/test.py::test_global_in 'test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes]' 'test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes]' test_compression_nested_columns/test.py::test_nested_compression_codec test_concurrent_queries_restriction_by_query_kind/test.py::test_insert test_concurrent_queries_restriction_by_query_kind/test.py::test_select test_config_decryption/test_wrong_settings.py::test_invalid_chars test_config_decryption/test_wrong_settings.py::test_no_encryption_key test_config_decryption/test_wrong_settings.py::test_subnodes test_config_decryption/test_wrong_settings.py::test_wrong_method test_config_xml_main/test.py::test_xml_main_conf test_config_yaml_main/test.py::test_yaml_main_conf test_create_query_constraints/test.py::test_create_query_const_constraints test_create_query_constraints/test.py::test_create_query_minmax_constraints test_custom_settings/test.py::test_custom_settings test_custom_settings/test.py::test_illformed_setting test_ddl_config_hostname/test.py::test_ddl_queue_delete_add_replica test_ddl_worker_replicas/test.py::test_ddl_worker_replicas test_default_role/test.py::test_alter_user test_default_role/test.py::test_set_default_roles test_default_role/test.py::test_wrong_set_default_role 'test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_complex[complex_key_cache]' 'test_dictionaries_all_layouts_separate_sources/test_executable_cache.py::test_simple[cache]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_cache]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_direct]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_complex[complex_key_hashed]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_ranged[range_hashed]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[cache]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[direct]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[flat]' 'test_dictionaries_all_layouts_separate_sources/test_http.py::test_simple[hashed]' 'test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node0]' 'test_dictionaries_dependency/test.py::test_dependency_via_dictionary_database[node1]' 'test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node0]' 'test_dictionaries_dependency/test.py::test_dependency_via_explicit_table[node1]' 'test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node0]' 'test_dictionaries_dependency/test.py::test_dependency_via_implicit_table[node1]' 'test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node0]' 'test_dictionaries_dependency/test.py::test_dependent_dict_table_distr[node1]' test_dictionaries_dependency/test.py::test_no_lazy_load 'test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_integers_key_hashed]' 'test_dictionaries_select_all/test.py::test_select_all[clickhouse_complex_mixed_key_hashed]' 'test_dictionaries_select_all/test.py::test_select_all[clickhouse_flat]' 'test_dictionaries_select_all/test.py::test_select_all[clickhouse_hashed]' 'test_dictionaries_select_all/test.py::test_select_all[clickhouse_range_hashed]' 'test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_cache]' 'test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_integers_key_cache]' 'test_dictionaries_select_all/test.py::test_select_all_from_cached[clickhouse_complex_mixed_key_cache]' test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation test_disk_access_storage/test.py::test_alter -vvv" altinityinfra/integration-tests-runner:cd6390247eca ' returned non-zero exit status 1.